Path: blob/master/Part 9 - Dimension Reduction/Kernel PCA/[Python] Kernel PCA.ipynb
Kernel: Python 3
Kernel PCA
Data preprocessing
In [1]:
In [2]:
Out[2]:
In [3]:
Out[3]:
array([[ 1.90000000e+01, 1.90000000e+04],
       [ 3.50000000e+01, 2.00000000e+04],
       [ 2.60000000e+01, 4.30000000e+04],
       [ 2.70000000e+01, 5.70000000e+04],
       [ 1.90000000e+01, 7.60000000e+04],
       [ 2.70000000e+01, 5.80000000e+04],
       [ 2.70000000e+01, 8.40000000e+04],
       [ 3.20000000e+01, 1.50000000e+05],
       [ 2.50000000e+01, 3.30000000e+04],
       [ 3.50000000e+01, 6.50000000e+04]])
In [4]:
Out[4]:
array([0, 0, 0, 0, 0, 0, 0, 1, 0, 0])
In [5]:
In [6]:
In [7]:
Out[7]:
array([[-1.06675246, -0.38634438],
       [ 0.79753468, -1.22993871],
       [ 0.11069205,  1.853544  ],
       [ 0.60129393, -0.90995465],
       [ 1.87685881, -1.28811763],
       [-0.57615058,  1.44629156],
       [ 0.3069328 , -0.53179168],
       [ 0.99377543,  0.10817643],
       [-1.16487283,  0.45724994],
       [-1.55735433,  0.31180264]])
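The code cells themselves are not rendered above, so here is a minimal sketch of the preprocessing they imply: load the dataset, take the two feature columns shown in Out[3] (age and estimated salary) plus the 0/1 target shown in Out[4], split into training and test sets, and standardize the features as seen in Out[7]. The file name Social_Network_Ads.csv, the column indices, and the split parameters are assumptions, not taken from the notebook.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Assumed dataset file and column layout (Age, EstimatedSalary -> Purchased)
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values   # Age, EstimatedSalary
y = dataset.iloc[:, 4].values        # Purchased (0 or 1)

# Assumed split ratio and seed; the confusion matrix below sums to 80 test samples
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Scale using statistics learned on the training set only
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)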
Applying Kernel PCA
In [8]:
In [9]:
Out[9]:
array([[-0.53003086, -0.35781602],
       [ 0.02840279,  0.53035218],
       [ 0.62073943, -0.40226247],
       [-0.05708502,  0.60853904],
       [ 0.29502125,  0.24263861],
       [ 0.39575077, -0.50069924],
       [-0.16536697,  0.55573179],
       [ 0.38535393,  0.43433109],
       [-0.1775384 , -0.57558876],
       [-0.21299263, -0.61901708]])
In [10]:
Out[10]:
array([[ 0.03124483,  0.46725717],
       [ 0.50049287,  0.04051636],
       [-0.58799448, -0.2242753 ],
       [ 0.59745597,  0.10840843],
       [-0.53486425, -0.1218985 ],
       [-0.49000067, -0.2156769 ],
       [ 0.66353176, -0.05595593],
       [-0.20138076, -0.4484187 ],
       [ 0.0499621 ,  0.29711545],
       [ 0.01620058,  0.50016371]])
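Out[9] and Out[10] each have two columns, which points to a KernelPCA with n_components=2; the RBF kernel is the usual choice for this exercise, but the exact kernel and its parameters are an assumption since the code cell is not shown.

from sklearn.decomposition import KernelPCA

# Assumed: two components with an RBF kernel
kpca = KernelPCA(n_components=2, kernel='rbf')
X_train = kpca.fit_transform(X_train)   # learn the kernel components on the training set
X_test = kpca.transform(X_test)         # project the test set onto the same components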
Fitting Logistic Regression to the Training Set
In [11]:
Out[11]:
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
          intercept_scaling=1, max_iter=100, multi_class='ovr', n_jobs=1,
          penalty='l2', random_state=42, solver='liblinear', tol=0.0001,
          verbose=0, warm_start=False)
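Out[11] prints the fitted estimator, so the cell presumably looked like the sketch below: random_state=42 matches the output, and the remaining parameters are the defaults of the scikit-learn version used here.

from sklearn.linear_model import LogisticRegression

classifier = LogisticRegression(random_state=42)
classifier.fit(X_train, y_train)   # train on the two kernel principal components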
Predicting the Test set results
In [12]:
In [13]:
Out[13]:
array([0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0])
In [14]:
Out[14]:
array([0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0])
The predicted labels match the actual test set labels shown above, so the predictions look good.
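A sketch of the prediction step, assuming the variable names introduced above; Out[13] and Out[14] then presumably display a slice of the predicted and the actual labels.

y_pred = classifier.predict(X_test)
print(y_pred[:15])   # predicted labels (hypothetical slice size)
print(y_test[:15])   # actual labels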
Making the Confusion Matrix
In [15]:
In [16]:
Out[16]:
array([[48,  4],
       [ 6, 22]])
The classifier made 48 + 22 = 70 correct predictions and 6 + 4 = 10 incorrect predictions.
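A minimal sketch of the confusion matrix cell, assuming scikit-learn's confusion_matrix with the true labels first and the predictions second (rows are actual classes, columns are predicted classes).

from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_pred)
print(cm)   # [[TN, FP], [FN, TP]] for the 0/1 target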
Visualizing the training set results
In [17]:
In [18]:
In [19]:
In [20]:
Out[20]:
<matplotlib.legend.Legend at 0x7f917c8a6a90>
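Out[20] is a matplotlib legend handle, so the cells above presumably build the usual decision-boundary plot. A sketch, assuming the two kernel principal components are on the axes; the colormap and grid step are arbitrary choices.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Fine grid over the two kernel principal components of the training set
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(
    np.arange(X_set[:, 0].min() - 1, X_set[:, 0].max() + 1, 0.01),
    np.arange(X_set[:, 1].min() - 1, X_set[:, 1].max() + 1, 0.01))

# Color each grid point by the class the classifier predicts there
plt.contourf(X1, X2,
             classifier.predict(np.c_[X1.ravel(), X2.ravel()]).reshape(X1.shape),
             alpha=0.5, cmap=ListedColormap(('salmon', 'lightgreen')))

# Overlay the actual training points, colored by their true class
for j in np.unique(y_set):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], label=j)

plt.title('Logistic Regression (Training set)')
plt.xlabel('First kernel principal component')
plt.ylabel('Second kernel principal component')
plt.legend()
plt.show()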
Visualizing the test set results
In [21]:
Out[21]:
<matplotlib.legend.Legend at 0x7f917c882c50>
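The test-set figure is presumably produced by the same plotting code, just pointed at the held-out split:

# Same plotting code as above, with the test split substituted in
X_set, y_set = X_test, y_test
# ... identical meshgrid / contourf / scatter calls ...
plt.title('Logistic Regression (Test set)')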